Neural Relational Inference with Fast Modular Meta-learning

Neural Information Processing Systems

Graph neural networks (GNNs) are effective models for many dynamical systems consisting of entities and relations. Although most GNN applications assume a single type of entity and relation, many situations involve multiple types of interactions. Relational inference is the problem of inferring these interactions and learning the dynamics from observational data. We frame relational inference as a modular meta-learning problem, where neural modules are trained to be composed in different ways to solve many tasks. This meta-learning framework allows us to implicitly encode time invariance and infer relations in context of one another rather than independently, which increases inference capacity. Framing inference as the inner-loop optimization of meta-learning leads to a model-based approach that is more data-efficient and capable of estimating the state of entities that we do not observe directly, but whose existence can be inferred from their effect on observed entities. To address the large search space of graph neural network compositions, we meta-learn a proposal function that speeds up the inner-loop simulated annealing search within the modular meta-learning algorithm, providing two orders of magnitude increase in the size of problems that can be addressed.
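The inner-loop search described above can be pictured as simulated annealing over assignments of neural modules (relation types) to graph edges, guided by a proposal function. The following is a minimal illustrative sketch, not the paper's implementation: `loss`, `proposal`, and `random_flip` are hypothetical stand-ins for the learned dynamics loss, the meta-learned proposal, and a naive baseline proposal.

```python
import math
import random

def simulated_annealing(edges, n_types, loss, proposal, steps=100, t0=1.0):
    # Start from a random assignment of a module type to every edge.
    structure = {e: random.randrange(n_types) for e in edges}
    best, best_loss = dict(structure), loss(structure)
    cur_loss = best_loss
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-6   # linear cooling schedule
        candidate = proposal(structure)          # proposal suggests an edit
        cand_loss = loss(candidate)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if cand_loss < cur_loss or random.random() < math.exp((cur_loss - cand_loss) / temp):
            structure, cur_loss = candidate, cand_loss
            if cur_loss < best_loss:
                best, best_loss = dict(structure), cur_loss
    return best

def random_flip(structure, n_types=2):
    # Naive proposal: flip one random edge's module type. The paper's
    # meta-learned proposal would instead bias edits toward promising changes,
    # which is what makes the search scale to much larger graphs.
    new = dict(structure)
    e = random.choice(list(new))
    new[e] = (new[e] + random.randrange(1, n_types)) % n_types
    return new
```

Swapping `random_flip` for a learned proposal changes only the `proposal` argument; the annealing loop itself is unchanged.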


Reviews: Neural Relational Inference with Fast Modular Meta-learning

Neural Information Processing Systems

This paper is unbalanced in two ways. Firstly, the balance of space devoted to discussing background versus contributions is skewed too heavily towards prior work, with too little focus on explaining the contributions of this work. Secondly, the coverage of the literature is heavily focused on graph networks and meta-learning, but neglects prior work on (non-graph-based) modular networks and on learned proposal distributions. Regarding the first imbalance, the section on lines 201-235 is by far the most important content in the paper, but is positioned almost as an afterthought to the extensive exposition of Alet et al. (2018). The paper would be much stronger if other sections were shortened and the descriptions in this region were substantially expanded.


Reviews: Neural Relational Inference with Fast Modular Meta-learning

Neural Information Processing Systems

This paper is quite borderline. The reviewers were all leaning towards acceptance, but none were willing to fight for the paper. The main concern is that the empirical performance gain over Kipf et al. is quite small and, in some cases, non-existent. On balance, I would put this paper slightly over the bar for acceptance. However, we strongly encourage the authors to better highlight the benefits over Kipf et al. regarding unseen nodes in the final version.



Neural Relational Inference with Fast Modular Meta-learning

Alet, Ferran, Weng, Erica, Lozano-Pérez, Tomás, Kaelbling, Leslie Pack

arXiv.org Artificial Intelligence

Graph neural networks (GNNs) are effective models for many dynamical systems consisting of entities and relations. Although most GNN applications assume a single type of entity and relation, many situations involve multiple types of interactions. Relational inference is the problem of inferring these interactions and learning the dynamics from observational data. We frame relational inference as a modular meta-learning problem, where neural modules are trained to be composed in different ways to solve many tasks. This meta-learning framework allows us to implicitly encode time invariance and infer relations in context of one another rather than independently, which increases inference capacity. Framing inference as the inner-loop optimization of meta-learning leads to a model-based approach that is more data-efficient and capable of estimating the state of entities that we do not observe directly, but whose existence can be inferred from their effect on observed entities. To address the large search space of graph neural network compositions, we meta-learn a proposal function that speeds up the inner-loop simulated annealing search within the modular meta-learning algorithm, providing two orders of magnitude increase in the size of problems that can be addressed.


Neural Relational Inference with Efficient Message Passing Mechanisms

Chen, Siyuan, Wang, Jiahai, Li, Guoqing

arXiv.org Artificial Intelligence

Many complex processes can be viewed as dynamical systems of interacting agents. In many cases, only the state sequences of individual agents are observed, while the interacting relations and the dynamical rules are unknown. The neural relational inference (NRI) model adopts graph neural networks that pass messages over a latent graph to jointly learn the relations and the dynamics based on the observed data. However, NRI infers the relations independently and suffers from error accumulation in multi-step prediction during the dynamics-learning procedure. Moreover, relation reconstruction without prior knowledge becomes more difficult in more complex systems. This paper introduces efficient message passing mechanisms to the graph neural networks with structural prior knowledge to address these problems. A relation interaction mechanism is proposed to capture the coexistence of all relations, and a spatio-temporal message passing mechanism is proposed to use historical information to alleviate error accumulation. Additionally, structural prior knowledge, with symmetry as a special case, is introduced for better relation prediction in more complex systems. Experimental results on simulated physics systems show that the proposed method outperforms existing state-of-the-art methods.
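The core operation all of these NRI variants share is message passing over a latent relation graph, with a separate message function per inferred relation type. A minimal sketch of one such step follows; the function names, shapes, and residual update are illustrative assumptions, not the architecture of either paper.

```python
import numpy as np

def message_passing_step(states, rel_types, msg_fns):
    """One relation-typed message-passing step.

    states:    (n, d) array of node states.
    rel_types: dict mapping directed edge (i, j) -> relation-type index.
    msg_fns:   list of per-type functions (sender_state, receiver_state) -> (d,).
    """
    n, d = states.shape
    incoming = np.zeros((n, d))
    for (i, j), r in rel_types.items():
        # Message from sender i to receiver j, computed by the function
        # associated with this edge's inferred relation type.
        incoming[j] += msg_fns[r](states[i], states[j])
    # Residual node update: aggregate incoming messages into each state.
    return states + incoming
```

In a trained model the `msg_fns` would be small neural networks and `rel_types` would come from the inference procedure; here they are plain callables so the wiring is easy to see.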



Building Models that Learn to Discover Structure and Relations

#artificialintelligence

Some argue that a key component of human intelligence is our ability to reason about objects and their relations. This enables us, for example, to build rich compositional models of physics (how objects or particles interact) and intuitive theories of causation (what causes what) [3]. For artificial systems, these tasks remain a challenge. Most sophisticated pattern-recognition models, e.g. those based on Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs), lack a certain relational inductive bias [4], impeding their ability to generalize well on problems with inherent compositional structure. In our recent ICML (2018) paper, Neural Relational Inference for Interacting Systems, we explore a class of models named Graph Neural Networks (GNNs) that reflect the inherent structure of the problem domain in their model architecture.


Neural Relational Inference for Interacting Systems

Kipf, Thomas, Fetaya, Ethan, Wang, Kuan-Chieh, Welling, Max, Zemel, Richard

arXiv.org Machine Learning

Interacting systems are prevalent in nature, from dynamical systems in physics to complex societal dynamics. The interplay of components can give rise to complex behavior, which can often be explained using a simple model of the system's constituent parts. In this work, we introduce the neural relational inference (NRI) model: an unsupervised model that learns to infer interactions while simultaneously learning the dynamics purely from observational data. Our model takes the form of a variational auto-encoder, in which the latent code represents the underlying interaction graph and the reconstruction is based on graph neural networks. In experiments on simulated physical systems, we show that our NRI model can accurately recover ground-truth interactions in an unsupervised manner. We further demonstrate that we can find an interpretable structure and predict complex dynamics in real motion capture and sports tracking data.
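Since the latent code is a discrete interaction graph, training a variational auto-encoder over it requires a differentiable way to sample edge types from the encoder's logits; the Gumbel-softmax relaxation is the standard trick for this. The sketch below is illustrative only: the shapes and temperature are assumptions, not the paper's exact configuration.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Relaxed one-hot samples from per-edge categorical logits.

    logits: (n_edges, n_types) array; each row parameterizes one edge's
            distribution over relation types.
    tau:    temperature; lower values give samples closer to one-hot.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise via the inverse-CDF trick.
    g = -np.log(-np.log(rng.uniform(1e-10, 1.0, logits.shape)))
    # Tempered softmax over the noisy logits (numerically stabilized).
    y = (logits + g) / tau
    y = y - y.max(axis=-1, keepdims=True)
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)
```

Each output row approximates a one-hot relation type for one edge; a decoder GNN would then condition its per-edge messages on these samples, keeping the whole encoder-decoder pipeline differentiable.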